Public Health
Artificial Intelligence Applications in Horizon Scanning for Infectious Diseases
Miles, Ian, Wakimoto, Mayumi, Meira, Wagner Jr., Paula, Daniela, Ticiane, Daylene, Rosa, Bruno, Biddulph, Jane, Georgiou, Stelios, Ermida, Valdir
This review explores the integration of Artificial Intelligence into Horizon Scanning, focusing on identifying and responding to emerging threats and opportunities linked to Infectious Diseases. We examine how AI tools can enhance signal detection, data monitoring, scenario analysis, and decision support. We also address the risks associated with AI adoption and propose strategies for effective implementation and governance. The findings contribute to the growing body of Foresight literature by demonstrating the potential and limitations of AI in Public Health preparedness.
- Asia > Japan > Kyūshū & Okinawa > Kyūshū > Kumamoto Prefecture > Kumamoto (0.04)
- South America > Brazil > Rio de Janeiro > Rio de Janeiro (0.04)
- South America > Brazil > Minas Gerais (0.04)
- North America > United States > Pennsylvania (0.04)
- North America > United States > Minnesota (0.04)
- (6 more...)
- Overview (1.00)
- Research Report (0.82)
Machine Learning and Public Health: Identifying and Mitigating Algorithmic Bias through a Systematic Review
Altamirano, Sara, Vreeken, Arjan, Ghebreab, Sennay
Machine learning (ML) promises to revolutionize public health through improved surveillance, risk stratification, and resource allocation. However, without systematic attention to algorithmic bias, ML may inadvertently reinforce existing health disparities. We present a systematic literature review of algorithmic bias identification, discussion, and reporting in Dutch public health ML research from 2021 to 2025. To this end, we developed the Risk of Algorithmic Bias Assessment Tool (RABAT) by integrating elements from established frameworks (Cochrane Risk of Bias, PROBAST, Microsoft Responsible AI checklist) and applied it to 35 peer-reviewed studies. Our analysis reveals pervasive gaps: although data sampling and missing data practices are well documented, most studies omit explicit fairness framing, subgroup analyses, and transparent discussion of potential harms. In response, we introduce a four-stage fairness-oriented framework called ACAR (Awareness, Conceptualization, Application, Reporting), with guiding questions derived from our systematic literature review to help researchers address fairness across the ML lifecycle. We conclude with actionable recommendations for public health ML practitioners to consistently consider algorithmic bias and foster transparency, ensuring that algorithmic innovations advance health equity rather than undermine it.
- North America > United States (0.14)
- Europe > Netherlands > North Holland > Amsterdam (0.05)
- North America > Canada (0.04)
- (5 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Public Health (1.00)
- Health & Medicine > Therapeutic Area > Immunology (0.68)
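The gap the review highlights, missing subgroup analyses, can be made concrete with a small sketch. The function, predictions, and group labels below are illustrative inventions, not data from the reviewed studies; the sketch computes a demographic-parity gap, one common subgroup fairness check:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between subgroups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of subgroup labels, aligned with predictions
    """
    pos = defaultdict(int)
    tot = defaultdict(int)
    for y, g in zip(predictions, groups):
        tot[g] += 1
        pos[g] += y
    rates = {g: pos[g] / tot[g] for g in tot}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical risk-stratification outputs for two subgroups:
preds  = [1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
# rates: {"A": 0.75, "B": 0.0}; gap: 0.75
```

A reporting framework like ACAR would ask not just for this number but for an explicit discussion of why the subgroup rates diverge and what harm that divergence could cause.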
From Keywords to Clusters: AI-Driven Analysis of YouTube Comments to Reveal Election Issue Salience in 2024
Simoes, Raisa M., Kelly, Timoteo, Simoes, Eduardo J., Rao, Praveen
Abstract: This paper aims to explore two competing data science methodologies to attempt answering the question, "Which issues contributed most to voters' choice in the 2024 presidential election?" The methodologies involve novel empirical evidence driven by artificial intelligence (AI) techniques. By using two distinct methods based on natural language processing and clustering analysis to mine over eight thousand user comments on election-related YouTube videos from one right-leaning outlet, the Wall Street Journal, and one left-leaning outlet, the New York Times, during pre-election week, we quantify the frequency of selected issue areas among user comments to infer which issues were most salient to potential voters in the seven days preceding the November 5th election. Empirically, we primarily demonstrate that immigration and democracy were the most frequently and consistently invoked issues in user comments on the analyzed YouTube videos, followed by the issue of identity politics, while inflation was significantly less frequently referenced. These results corroborate certain findings of post-election surveys but also refute the supposed importance of inflation as an election issue. This indicates that variations on opinion mining, with their analysis of raw user data online, can be more revealing than polling and surveys for analyzing election outcomes.
Keywords: artificial intelligence; opinion mining; clustering; vote choice; cleavages
1. Introduction
The Democrats lost both houses of Congress and the Presidency to Republicans in the 2024 election, with former president Donald Trump winning all seven swing states and the national popular vote, despite most pre-election polls giving Vice President Kamala Harris and President Trump a roughly equal chance of winning.
Most post-election punditry and analysis in the legacy press and alternative media has attributed the Democrats' large loss to two main issues: inflation [59] and immigration [30]. However, a growing contingent of analysts has also attributed the election outcome to the Democratic party's association with cultural issues purportedly distant from the median voter's preferences, such as those alternatively aggregated under the concept of "identity" or "woke" politics [54, 56]. To this point, three post-election studies illustrate how voters associated Democrats with left-of-center ideas that were ostensibly distant from most voters' priorities. Survey research from the think tank Third Way demonstrates that Democrats, and thus Kamala Harris, were largely perceived as "too liberal" [15], while a study from More In Common polling over 5,000 Americans concluded that while inflation was the top concern for every major demographic group across both parties, Americans misperceived LGBT/transgender policies as the top policy priority for Democrats [37].
- Europe > Germany (0.14)
- Europe > France (0.14)
- North America > United States > Missouri (0.04)
- (9 more...)
- Research Report > New Finding (0.93)
- Research Report > Experimental Study (0.68)
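The paper's keyword-frequency step can be approximated in a few lines. The issue lexicon below is a hypothetical stand-in (the authors' actual keyword lists and clustering pipeline are not reproduced here); the sketch counts how many comments mention each issue at least once:

```python
import re
from collections import Counter

# Hypothetical issue lexicon for illustration only:
ISSUE_KEYWORDS = {
    "immigration": {"immigration", "border", "migrant"},
    "democracy":   {"democracy", "democratic", "authoritarian"},
    "inflation":   {"inflation", "prices", "groceries"},
}

def issue_salience(comments):
    """Count comments mentioning each issue at least once."""
    counts = Counter()
    for c in comments:
        tokens = set(re.findall(r"[a-z']+", c.lower()))
        for issue, keywords in ISSUE_KEYWORDS.items():
            if tokens & keywords:
                counts[issue] += 1
    return counts

comments = [
    "The border crisis decided my vote",
    "Democracy itself was on the ballot",
    "Groceries cost too much",
    "Immigration and democracy both matter",
]
# issue_salience(comments) -> immigration: 2, democracy: 2, inflation: 1
```

The paper's second, clustering-based method would group comments by semantic similarity instead of fixed keywords; this lexicon approach is only the simpler of the two.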
Linguistic Patterns in Pandemic-Related Content: A Comparative Analysis of COVID-19, Constraint, and Monkeypox Datasets
Sikosana, Mkululi, Maudsley-Barton, Sean, Ajao, Oluwaseun
This study conducts a computational linguistic analysis of pandemic-related online discourse to examine how language distinguishes health misinformation from factual communication. Drawing on three corpora -- COVID-19 false narratives (n = 7,588), general COVID-19 content (n = 10,700), and Monkeypox-related posts (n = 5,787) -- we identify significant differences in readability, rhetorical markers, and persuasive language use. COVID-19 misinformation exhibited markedly lower readability scores and contained over twice the frequency of fear-related or persuasive terms compared to the other datasets. It also showed minimal use of exclamation marks, contrasting with the more emotive style of Monkeypox content. These patterns suggest that misinformation employs a deliberately complex rhetorical style embedded with emotional cues, a combination that may enhance its perceived credibility. Our findings contribute to the growing body of work on digital health misinformation by highlighting linguistic indicators that may aid detection efforts. They also inform public health messaging strategies and theoretical models of crisis communication in networked media environments. At the same time, the study acknowledges certain limitations, including reliance on traditional readability indices, use of a deliberately narrow persuasive lexicon, and reliance on static aggregate analysis. Future research should therefore incorporate longitudinal designs, broader emotion lexicons, and platform-sensitive approaches to strengthen robustness. The data and code are available at: https://doi.org/10.5281/zenodo.17024569
The COVID-19 pandemic challenged global health systems. The proliferation of health-related information on digital platforms accelerates dramatically during public health crises, creating opportunities for rapid knowledge dissemination but also challenges related to misinformation (Sikosana et al., 2024; Sikosana et al., 2025).
This dual nature of digital communication became particularly evident during the COVID-19 pandemic, which sparked an unprecedented volume of online discourse and was accompanied by what the World Health Organisation (WHO) termed an "infodemic": an overabundance of information (both accurate and not) that makes it hard for people to find trustworthy guidance (WHO, 2020). This infodemic phenomenon presents a communication challenge and a substantive threat to public health. Research has shown that exposure to COVID-19 misinformation can directly impact health behaviours. For example, exposure to false COVID-19 vaccine information was associated with a reduction in vaccination intent by about 6.4 percentage points in the UK (and a similar 6.2-point drop in the USA) (Chen et al., 2022; Loomba et al., 2021). Such an effect size is sufficient to undermine herd immunity thresholds.
- North America > United States (0.48)
- Europe > United Kingdom > England > Greater Manchester > Manchester (0.04)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
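The readability and persuasive-term measurements the study describes can be sketched as follows. The fear lexicon is a hypothetical mini-list, and the syllable counter is a naive vowel-group heuristic standing in for the traditional indices the authors used:

```python
import re

# Hypothetical mini-lexicon; the study used a (deliberately narrow) persuasive lexicon of its own.
FEAR_TERMS = {"danger", "deadly", "panic", "threat", "warning"}

def _syllables(word):
    # Naive heuristic: count vowel groups; serious work needs a proper syllabifier.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(_syllables(w) for w in words)
    n = len(words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

def fear_rate_per_1000(text):
    """Frequency of fear-related terms per 1,000 words."""
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    hits = sum(w in FEAR_TERMS for w in words)
    return 1000 * hits / len(words)

# fear_rate_per_1000("A deadly threat looms ahead.") -> 400.0 (2 hits in 5 words)
```

Comparing these two numbers across a misinformation corpus and a factual corpus reproduces the shape of the study's core contrast: lower readability plus a higher rate of fear-related terms.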
PHAX: A Structured Argumentation Framework for User-Centered Explainable AI in Public Health and Biomedical Sciences
İlgen, Bahar, Dubey, Akshat, Hattab, Georges
Ensuring transparency and trust in AI-driven public health and biomedical sciences systems requires more than accurate predictions; it demands explanations that are clear, contextual, and socially accountable. While explainable AI (XAI) has advanced in areas like feature attribution and model interpretability, most methods still lack the structure and adaptability needed for diverse health stakeholders, including clinicians, policymakers, and the general public. We introduce PHAX, a Public Health Argumentation and eXplainability framework, which leverages structured argumentation to generate human-centered explanations for AI outputs. PHAX is a multi-layer architecture combining defeasible reasoning, adaptive natural language techniques, and user modeling to produce context-aware, audience-specific justifications. More specifically, we show how argumentation enhances explainability by supporting AI-driven decision-making, justifying recommendations, and enabling interactive dialogues across user types. We demonstrate the applicability of PHAX through use cases such as medical term simplification, patient-clinician communication, and policy justification. In particular, we show how simplification decisions can be modeled as argument chains and personalized based on user expertise, enhancing both interpretability and trust. By aligning formal reasoning methods with communicative demands, PHAX contributes to a broader vision of transparent, human-centered AI in public health.
- Europe > Germany (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Health & Medicine > Public Health (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
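PHAX's idea of modeling a simplification decision as an argument chain can be illustrated with a toy defeasible structure. The classes, the acceptance rule, and the example arguments below are a simplified sketch, not the authors' architecture:

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    claim: str
    attackers: list = field(default_factory=list)  # arguments that defeat this one

def accepted(arg):
    """An argument stands if every attacker is itself defeated (naive grounded-style check)."""
    return all(not accepted(a) for a in arg.attackers)

# Toy chain for a medical-term simplification decision:
jargon = Argument("Use the clinical term 'myocardial infarction'")
lay = Argument("Audience is non-expert: simplify to 'heart attack'")
jargon.attackers.append(lay)
# accepted(lay) -> True; accepted(jargon) -> False
```

In the framework as described, user modeling would decide which attackers apply (an expert audience would remove the simplification argument, letting the clinical term stand), which is how the same chain yields audience-specific explanations.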
A Decision-Language Model (DLM) for Dynamic Restless Multi-Armed Bandit Tasks in Public Health
Restless multi-armed bandits (RMAB) have demonstrated success in optimizing resource allocation for large beneficiary populations in public health settings. Unfortunately, RMAB models lack flexibility to adapt to evolving public health policy priorities. Concurrently, Large Language Models (LLMs) have emerged as adept automated planners across domains of robotic control and navigation. In this paper, we propose a Decision Language Model (DLM) for RMABs, enabling dynamic fine-tuning of RMAB policies in public health settings using human-language commands. We propose using LLMs as automated planners to (1) interpret human policy preference prompts, (2) propose reward functions as code for a multi-agent RMAB environment, and (3) iterate on the generated reward functions using feedback from grounded RMAB simulations. We illustrate the application of DLM in collaboration with ARMMAN, an India-based non-profit promoting preventative care for pregnant mothers, which currently relies on RMAB policies to optimally allocate health worker calls to low-resource populations.
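The middle step of the DLM loop, an LLM proposing "reward functions as code", can be illustrated with a hand-written stand-in. The reward shape, the arm fields, and the greedy allocation below are hypothetical simplifications, not ARMMAN's policy or the paper's environment:

```python
def proposed_reward(state, is_low_income):
    """Stands in for an LLM-generated reward: value engagement,
    weight low-income beneficiaries higher (a hypothetical policy preference)."""
    base = 1.0 if state == "engaged" else 0.0
    return base * (2.0 if is_low_income else 1.0)

def allocate_calls(arms, budget):
    """Greedy top-k allocation by current reward; a placeholder for a real RMAB policy,
    which would also account for state transitions over time."""
    ranked = sorted(arms, key=lambda a: proposed_reward(a["state"], a["low_income"]),
                    reverse=True)
    return [a["id"] for a in ranked[:budget]]

arms = [
    {"id": 0, "state": "engaged",    "low_income": False},
    {"id": 1, "state": "engaged",    "low_income": True},
    {"id": 2, "state": "disengaged", "low_income": True},
]
# allocate_calls(arms, budget=1) -> [1]
```

In the proposed system, step (3) would run such a reward through grounded simulations and feed the outcomes back to the LLM, which revises the code until the simulated allocation matches the stated policy preference.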
Clarifying Misconceptions in COVID-19 Vaccine Sentiment and Stance Analysis and Their Implications for Vaccine Hesitancy Mitigation: A Systematic Review
Barberia, Lorena G, Lombard, Belinda, Roman, Norton Trevisan, Sousa, Tatiane C. M.
Background: Advances in machine learning (ML) models have increased the capability of researchers to detect vaccine hesitancy in social media using Natural Language Processing (NLP). A considerable volume of research has identified the persistence of COVID-19 vaccine hesitancy in discourse shared on various social media platforms.
Methods: Our objective in this study was to conduct a systematic review of research employing sentiment analysis or stance detection to study discourse towards COVID-19 vaccines and vaccination spread on Twitter (officially known as X since 2023). Following registration in the PROSPERO international registry of systematic reviews, we searched papers published from 1 January 2020 to 31 December 2023 that used supervised machine learning to assess COVID-19 vaccine hesitancy through stance detection or sentiment analysis on Twitter. We categorized the studies according to a taxonomy of five dimensions: tweet sample selection approach, self-reported study type, classification typology, annotation codebook definitions, and interpretation of results. We analyzed whether studies using stance detection report different hesitancy trends than those using sentiment analysis by examining how COVID-19 vaccine hesitancy is measured, and whether efforts were made to avoid measurement bias.
Results: Our review found that measurement bias is widely prevalent in studies employing supervised machine learning to analyze sentiment and stance toward COVID-19 vaccines and vaccination. The reporting errors are sufficiently serious that they hinder the generalisability and interpretation of these studies to understanding whether individual opinions communicate reluctance to vaccinate against SARS-CoV-2.
Conclusion: Improving the reporting of NLP methods is crucial to addressing knowledge gaps in vaccine hesitancy discourse.
- Asia > South Korea (0.14)
- South America > Brazil > São Paulo (0.05)
- Europe > Switzerland > Basel-City > Basel (0.05)
- (13 more...)
- Health & Medicine > Therapeutic Area > Vaccines (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Information Extraction (0.92)
- Information Technology > Artificial Intelligence > Natural Language > Discourse & Dialogue (0.78)
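The measurement problem the review describes, sentiment labels standing in for hesitancy stance, can be shown with a toy example. The annotated rows below are invented for illustration:

```python
# Hypothetical annotations showing sentiment and stance can diverge:
examples = [
    {"text": "My arm hurt for days after the shot, but I'm glad I got it.",
     "sentiment": "negative", "stance": "pro-vaccine"},
    {"text": "Vaccines are perfectly safe... sure they are.",
     "sentiment": "positive", "stance": "anti-vaccine"},  # sarcasm flips stance
    {"text": "Booked my booster today!",
     "sentiment": "positive", "stance": "pro-vaccine"},
]

def divergence_rate(rows):
    """Fraction of items where equating negative sentiment with hesitancy
    would mislabel the author's actual stance."""
    flipped = sum((r["sentiment"] == "negative") != (r["stance"] == "anti-vaccine")
                  for r in rows)
    return flipped / len(rows)

# divergence_rate(examples) -> 2/3
```

A nonzero divergence rate on an annotation sample is exactly the kind of measurement-bias check the review finds missing from most of the studies it surveys.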
A review on development of eco-friendly filters in Nepal for use in cigarettes and masks and Air Pollution Analysis with Machine Learning and SHAP Interpretability
Paneru, Bishwash, Paneru, Biplov, Mukhiya, Tanka, Poudyal, Khem Narayan
In Nepal, air pollution is a serious public health concern, especially in cities like Kathmandu where particulate matter (PM2.5 and PM10) has a major influence on respiratory health and air quality. The Air Quality Index (AQI) is predicted in this work using a Random Forest Regressor, and the model's predictions are interpreted using SHAP (SHapley Additive exPlanations) analysis. With the lowest Testing RMSE (0.23) and flawless R2 scores (1.00), CatBoost performs better than other models, demonstrating its greater accuracy and generalization, which is cross-validated using a nested cross-validation approach. NowCast Concentration and Raw Concentration are the most important elements influencing AQI values, according to SHAP analysis, which shows that the machine learning results are highly accurate. Their significance as major contributors to air pollution is highlighted by the fact that high values of these characteristics significantly raise the AQI. This study investigates the Hydrogen-Alpha (HA) biodegradable filter as a novel way to reduce the related health hazards. With removal efficiencies of more than 98% for PM2.5 and 99.24% for PM10, the HA filter offers exceptional defense against dangerous airborne particles. These devices, which are biodegradable face masks and cigarette filters, address the environmental issues associated with traditional filters' non-biodegradable waste while also lowering exposure to air contaminants.
- Overview (1.00)
- Research Report > New Finding (0.92)
- Research Report > Promising Solution (0.67)
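Attribution of AQI predictions to input features can be sketched with permutation-style importance, a simpler stand-in for the SHAP analysis the paper uses; the "fitted model" and data below are synthetic:

```python
import random

def permutation_importance(model, X, y, n_features):
    """Rise in mean squared error when each feature column is shuffled:
    features the model relies on hurt more when scrambled."""
    def mse(rows):
        return sum((model(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)
    base = mse(X)
    rng = random.Random(0)  # fixed seed for reproducibility
    importances = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        importances.append(mse(X_perm) - base)
    return importances

# Synthetic AQI-like data: feature 0 ("NowCast") dominates, feature 1 ("Raw") matters less.
X = [[float(i), float(i % 5)] for i in range(30)]
y = [2.0 * a + 0.5 * b for a, b in X]
model = lambda row: 2.0 * row[0] + 0.5 * row[1]  # pretend fitted regressor
imp = permutation_importance(model, X, y, 2)
# imp[0] > imp[1]: shuffling the dominant feature degrades predictions far more
```

SHAP additionally distributes each individual prediction across features with consistency guarantees; this global shuffle-based view only ranks features, which is why the paper's per-feature attribution claims rest on SHAP rather than a check like this.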
"It's a conversation, not a quiz": A Risk Taxonomy and Reflection Tool for LLM Adoption in Public Health
Zhou, Jiawei, Chen, Amy Z., Shah, Darshi, Reese, Laura Schwab, De Choudhury, Munmun
Recent breakthroughs in large language models (LLMs) have generated both interest and concern about their potential adoption as accessible information sources or communication tools across different domains. In public health -- where stakes are high and impacts extend across populations -- adopting LLMs poses unique challenges that require thorough evaluation. However, structured approaches for assessing potential risks in public health remain under-explored. To address this gap, we conducted focus groups with health professionals and health issue experiencers to unpack their concerns, situated across three distinct and critical public health issues that demand high-quality information: vaccines, opioid use disorder, and intimate partner violence. We synthesize participants' perspectives into a risk taxonomy, distinguishing and contextualizing the potential harms LLMs may introduce when positioned alongside traditional health communication. This taxonomy highlights four dimensions of risk in individual behaviors, human-centered care, information ecosystem, and technology accountability. For each dimension, we discuss specific risks and example reflection questions to help practitioners adopt a risk-reflexive approach. This work offers a shared vocabulary and reflection tool for experts in both computing and public health to collaboratively anticipate, evaluate, and mitigate risks in deciding when to employ LLM capabilities (or not) and how to mitigate harm when they are used.
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- North America > United States > Virginia (0.04)
- North America > United States > Texas (0.04)
- (6 more...)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- Health & Medicine > Public Health (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
- (2 more...)